Light Detection

Our lab also specializes in detecting small objects such as light sources.

Multi-Source Light Detection with YDLD

This research addresses the problem of multi-source light detection (LD) under night driving environments, focusing on the detection of various light sources such as vehicle lights, traffic signals, and streetlights. Unlike traditional binary light detection tasks, this work aims to classify and detect each light source distinctly.

To support this, the authors introduce a novel dataset named YouTube Driving Light Detection (YDLD), consisting of 3,516 images and 116,028 bounding box annotations for three classes: car lights, traffic lights, and streetlights. These images were collected from real driving videos under diverse night and evening conditions.

YDLD Dataset Examples

Fig 1. Red: Streetlight / Green: Traffic Signal / Blue: Car light (Source: Paper Figure 1)

Since light sources are often extremely small and visually similar to one another, detection performance is typically poor. To address this, the paper proposes several new methods:

Model Architecture: Semi-Supervised Focal Light Detection (SS-FLD)

The proposed architecture integrates Lightness Focal Loss and Spatial Attention Prior into a semi-supervised object detection (SSOD) framework. It mainly consists of the following three modules:

  1. Lightness Focal Loss (LF Loss)
    Enhances the conventional focal loss by:
    • Adding a false-positive penalty term
    • Incorporating spatial prior maps (φ)
    • Focusing more strongly on difficult-to-classify light sources
  2. Lightness Spatial Attention Prior
    Constructs prior maps indicating probable light-source locations from geometric context:
    • Based on camera pose and road structure (e.g., vanishing points)
    • Separate priors for each class (car light / traffic light / streetlight)
    • Helps suppress false positives in unlikely regions
  3. Semi-Supervised Object Detection Framework
    Uses a teacher-student paradigm:
    • The teacher generates pseudo-labels from weakly augmented unlabeled data
    • The student is trained on both labeled and pseudo-labeled data
    • LF Loss is applied to both labeled and pseudo-labeled samples
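The teacher-student mechanics in item 3 can be sketched in a few lines. The two operations below — an exponential-moving-average (EMA) teacher update and confidence-based pseudo-label filtering — are standard in SSOD pipelines; the specific momentum and threshold values here are illustrative assumptions, not the paper's settings:

```python
def ema_update(teacher, student, m=0.999):
    """Update teacher parameters as an exponential moving average of the
    student's parameters (the usual Mean Teacher rule in SSOD)."""
    return {k: m * teacher[k] + (1.0 - m) * student[k] for k in teacher}

def filter_pseudo_labels(detections, thresh=0.7):
    """Keep only the teacher's confident detections as pseudo-labels for the
    student. Each detection is (box, cls, score); the 0.7 threshold is an
    illustrative choice."""
    return [(box, cls) for box, cls, score in detections if score >= thresh]
```

In training, the teacher runs on weakly augmented unlabeled frames, its confident outputs are kept as pseudo-boxes for the student's strongly augmented view, and the teacher weights are refreshed by `ema_update` after every student step.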
SS-FLD Model Architecture

Fig 2. SS-FLD Architecture: Integrating LF Loss and Attention Prior into a Semi-Supervised Detection Pipeline (source: paper Figure 3)

  • Lightness Focal Loss (LF Loss): A modified focal loss that strongly penalizes misclassified or false-positive detections
  • Lightness Spatial Attention Prior: A spatial prior that reflects the geometric relationship between camera and light sources
  • Semi-Supervised Focal Light Detection (SS-FLD): A method that integrates LF loss into semi-supervised training to enhance generalization
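To make the spatial prior concrete: the intuition is that each class tends to occupy a predictable vertical band of the image (streetlights high, car lights near the road surface). The paper derives its priors from camera pose and road geometry; the Gaussian band below is only a simplified stand-in for that construction, and the per-class band centers are hypothetical values chosen for illustration:

```python
import numpy as np

def make_spatial_prior(h, w, center_frac, sigma_frac=0.15):
    """Illustrative class-specific spatial prior map of shape (h, w):
    a horizontal Gaussian band of high probability centered at row
    center_frac * h. A stand-in for the geometry-derived priors in the paper."""
    rows = np.arange(h, dtype=np.float64)
    band = np.exp(-0.5 * ((rows - center_frac * h) / (sigma_frac * h)) ** 2)
    return np.tile(band[:, None], (1, w))  # values in (0, 1]

# hypothetical band centers: streetlights high in the frame, car lights lower
priors = {name: make_spatial_prior(360, 640, c)
          for name, c in [("streetlight", 0.2),
                          ("traffic_light", 0.35),
                          ("car_light", 0.55)]}
```

A detection scored in a region where its class prior is near zero (e.g., a "streetlight" at the bottom of the frame) can then be down-weighted, which is how the prior suppresses false positives in unlikely regions.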
LF Loss Comparison

Fig 3. Comparison of standard focal loss and proposed LF loss (Source: Paper Figure 4)
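The contrast in Fig 3 can be sketched numerically. The paper's exact LF Loss formulation is not reproduced here; the version below is an assumed illustrative form that keeps the two ingredients described above: a standard focal loss plus an extra false-positive penalty on background predictions, scaled by the spatial prior φ so that confident detections in unlikely regions cost more:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Standard focal loss for one prediction p in (0, 1) with label y in {0, 1}."""
    pt = p if y == 1 else 1.0 - p
    a = alpha if y == 1 else 1.0 - alpha
    return -a * (1.0 - pt) ** gamma * np.log(pt)

def lf_loss(p, y, prior, alpha=0.25, gamma=2.0, beta=1.0):
    """Illustrative Lightness Focal Loss: focal loss plus a false-positive
    penalty on background (y == 0), weighted by (1 - prior) so that a
    confident detection where the spatial prior phi is low is punished
    harder. beta and the penalty's exact form are assumptions."""
    loss = focal_loss(p, y, alpha, gamma)
    if y == 0:
        loss += beta * (1.0 - prior) * p ** gamma * (-np.log(1.0 - p))
    return loss
```

For positive samples this reduces to the plain focal loss; for background, the same confident score incurs a larger loss when the prior says a light source is improbable there.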

Through extensive experiments with state-of-the-art detectors (Faster R-CNN, YOLOX, DINO, etc.), the proposed SS-FLD achieved the best mAP of 26.0 on the YDLD benchmark. It particularly outperformed the others in detecting very tiny and small objects (AP_vt, AP_t), which are critical in light detection.